6 December 2024
AI, Media and Democracy Lab: Why Bother with AI Transparency?
Key Concerns
Participants voiced concerns about AI in journalism, particularly the risk of eroding trust. One participant summed it up: “If you can’t tell what’s real or fake, it could lead to distrust in everything we read.”
The Desire for Disclosure
Despite these concerns, participants strongly favored transparency, expressing a clear desire for AI-generated content to be distinctly labeled.
Recommendations for News Organizations
From these findings, three key takeaways emerged:
- Transparency is essential: Clearly label AI-generated content to address public concerns.
- Tailored disclosures: Different audiences may need varying levels of detail about AI involvement.
- Build digital literacy: Combine transparency with education to empower readers in navigating AI content.
While transparency alone won’t solve all trust issues in journalism, it’s a critical step. As one participant suggested, “I don’t mind AI-written articles, but I want to know. Just be clear.”

As AI adoption continues to grow, news organizations must balance innovation with accountability, ensuring audiences remain informed and engaged.
This research reflects an important step toward understanding how transparency can rebuild trust in AI-driven journalism.

Stay connected with the AI, Media and Democracy Lab.
Similar news items
7 December 2024
Booking.com in the top 3 of the Dutch R&D Top 30
6 December 2024
ELLIS Unit Amsterdam Hosts Third Annual NeurIPS Fest
6 December 2024
Advanced AI for Surveillance Robots